| Category | Educational videos |
| --- | --- |
| Course level | Intermediate |
| Product type | Single product |
| Language | Persian subtitles |
| Size | 10.08 |
| Price | 100,000.00 rials |
Course Overview

We live on a planet with billions of people—but also billions of computers, many of them programmed to evaluate and make decisions much as humans do. We don't yet reside among truly intelligent machines, but they are getting there, and knowing how machines learn is crucial for everyone from professionals to students to ordinary citizens. Machine learning pervades our culture in a multitude of ways, through tools and practices from medical diagnosis and data management to speech synthesis and search engines. An offshoot of artificial intelligence, machine learning takes programming a giant step beyond the traditional role of computers in routine data processing, such as scheduling, keeping accounts, and making calculations. Now computers are being programmed to figure out how to solve problems by themselves—problems that are so complex that humans often don't know where to begin. Indeed, machine learning has become so advanced that, often, even the experts don't know how a computer arrives at the solution it does.

Introduction to Machine Learning demystifies this revolutionary discipline in 25 try-it-yourself lessons taught by award-winning educator and researcher Michael L. Littman, the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. Dr. Littman guides you through the history, concepts, and techniques of machine learning, using the popular computer language Python to give you hands-on experience with the most widely used programs and specialized libraries. For those new to Python, this course includes a lecture that is a dedicated tutorial on how to get started with this versatile, easy-to-use language. Professor Littman includes approximately one Python demonstration in each lesson. Even if you have never written code in Python, or any language, you can still run these programs for yourself to get a feeling for the amazing power of machine learning.

Get Started with Machine Learning

Backed by Bach-inspired music composed by a machine learning program, Professor Littman opens the course with playful displays of the technology: automatic voice transcription, word prediction, face aging, foreign language translation, voice simulation, and more. Then he launches into a real-world example: how to use machine learning to listen to heartbeats and diagnose heart disease. Traditional computer programs only do what you tell them to, and medical software would typically match a set of symptoms to already well-established diagnoses. But the advantage of machine learning is that the computer is set loose to find patterns that may have escaped human observation.

How does it do it? Professor Littman walks you through the process, which starts with choosing a "representational space"—a formal description that defines how to approach the problem. The representational space is the domain of all possible rules, or algorithms, which the machine-learning program should consider. It's called a space because it encompasses an array of possibilities that can be made more or less expansive depending on the data and time available. The next step is defining the "loss function," which determines how the possible rules in the representational space are assessed; better rules get better scores. Finally, a program called the "optimizer" rummages through the representational space to find the rules that score well. One or more of these rules become the preferred solution to the problem.
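The three ingredients described above fit in a few lines of Python. The sketch below is a hypothetical illustration rather than the course's own code: the representational space is the set of simple threshold rules on one made-up measurement, the loss function counts misclassified examples, and the optimizer is a plain exhaustive search.

```python
# Minimal sketch of the three ingredients; the data are invented for illustration.

# Toy dataset: one measurement per example, with a 0/1 diagnosis label.
examples = [(0.2, 0), (0.4, 0), (0.5, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

# Representational space: every rule of the form "predict 1 if x >= threshold".
thresholds = [i / 10 for i in range(11)]

def loss(threshold):
    """Loss function: how many examples the rule gets wrong (lower is better)."""
    return sum((x >= threshold) != bool(label) for x, label in examples)

# Optimizer: exhaustive search through the representational space.
best = min(thresholds, key=loss)
print(f"best threshold: {best}, errors: {loss(best)}")
```

Real systems search far richer spaces, such as trees or networks, and use cleverer optimizers, but the division of labor is the same.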
Dig into the Details

In Introduction to Machine Learning, you investigate three major types of representational spaces, focusing on the types of problems they excel at solving.

Decision Trees: Anyone who has dealt with a phone menu has faced a decision tree. "For sales, press 1. For accounts, press 2." Each choice is followed by additional choices, until you get the person or department you want. Decision trees are a natural fit for machine-learning problems that require "if-then" reasoning, such as many medical diagnoses.

Bayesian Networks: In contrast to decision trees, which rely on a sequence of deductions, Bayesian networks involve inferences from probability. They are well-suited to cases where you need to work backwards from the data to their likely causes. A prominent example is software that identifies probable spam messages.

Neural Networks: Designed to work like neurons in the brain, neural networks excel at perceptual tasks, such as image recognition, language processing, and data classification. Deep neural networks are composed of networks of networks and are the heart of the "deep learning" revolution that Professor Littman covers in detail.

You delve into the mechanics of each of these strategies as well as their pitfalls, especially overfitting, which is when a rule works too well. Overfitting may sound like a good thing, but it is a sign that the rule is tailored too closely to the original data and may not work on new data that requires treatment with a general rule. Professor Littman explains how to steer clear of this hazard and deal with other problems, such as hidden biases, sampling flaws, and false positives.
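To make the overfitting discussion concrete, here is a short, hypothetical sketch (not taken from the course) that uses scikit-learn, a common Python machine-learning library, on synthetic data. It compares an unrestricted decision tree with a shallow one; a large gap between training and test accuracy is the telltale sign that a rule has been tailored too closely to the original data.

```python
# Hypothetical overfitting check with decision trees; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a tabular dataset with eight features.
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (None, 3):  # unrestricted tree vs. a small, more general tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```

The unrestricted tree typically scores near 1.0 on the data it was trained on yet noticeably lower on the held-out split, while the depth-limited tree gives up a little training accuracy in exchange for better generalization.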
Embark on Your Own Coding Adventures

Another way to classify machine learning programs is the degree of human input involved. Does the programmer specify a desired outcome or leave it to the computer—or is the approach something in between? These different strategies are:

Supervised Learning: Here, the desired answer is supplied by the programmer as a training dataset that acts as a teacher to guide the learning process. Recommender systems where the user rates a product work like this. So do a host of other machine-learning programs where examples are labeled with their relevant attributes.

Unsupervised Learning: This approach is like having no teacher at all. There is no right answer, just training data to be compared to test data in a search for similarity. News story recommender systems typically work this way, since people rarely rate the news.

Reinforcement Learning: This hybrid strategy is Dr. Littman's favorite style of machine learning. Think of it as like having access to a critic. You are not being told what to do; you are simply getting feedback on how well you did it. The many examples of reinforcement learning in this course include an entire lesson on game-playing programs.

Throughout this extraordinary course, you dig deeply into the uses of machine learning for cutting-edge problems in research, education, business, entertainment, and daily life. You also consider the social implications of machine learning, which are likely to loom ever larger as its influence grows. Dr. Littman stresses that it's up to each of us to ensure that this technology is applied in ways that benefit us all. Therefore, it's up to us to boost our machine-learning literacy. Many people regard the subject as a black box, where inscrutable things happen that lead to today's technological wonders. Fortunately, Professor Littman has a gift for making opaque processes not only clear, but captivating. Introduction to Machine Learning will open your eyes to this thrilling field and, better yet, pave the way for your own coding adventures in machine learning.

Lectures

Telling the Computer What We Want: Professor Littman gives a bird's-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.

Starting with Python Notebooks and Colab: The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.

Decision Trees for Logical Rules: Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I—as in "diet" and "weigh"? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood for an individual to develop diabetes based on eight health factors.

Neural Networks for Perceptual Rules: Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.

Opening the Black Box of a Neural Network: Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.

Bayesian Models for Probability Prediction: A program need not understand the content of an email to know with high probability that it's spam. Discover how machine learning does so with the Naïve Bayes approach, which is a simplified application of Bayes' theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backwards from effects (in this case, words) to their causes (spam).

Genetic Algorithms for Evolved Rules: When you encounter a new type of problem and don't yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.

Nearest Neighbors for Using Similarity: Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.
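As a companion to the nearest-neighbor lecture described above, here is a tiny, hypothetical sketch in plain Python. The comfort labels are invented; the program simply returns the label of the closest stored example.

```python
# Hypothetical nearest-neighbor sketch; the comfort data are made up.
import math

# (temperature in C, relative humidity in %) -> comfort label
labeled = [
    ((21, 45), "comfortable"),
    ((23, 50), "comfortable"),
    ((30, 70), "uncomfortable"),
    ((17, 80), "uncomfortable"),
]

def nearest_neighbor(query):
    """Predict by copying the label of the closest labeled point."""
    return min(labeled, key=lambda item: math.dist(query, item[0]))[1]

print(nearest_neighbor((22, 48)))  # prints: comfortable
print(nearest_neighbor((29, 75)))  # prints: uncomfortable
```

A real malware detector works the same way in spirit, only with many more features per program and usually a vote among several nearest neighbors rather than just one.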
The Fundamental Pitfall of Overfitting: Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.

Pitfalls in Applying Machine Learning: Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.

Clustering and Semi-Supervised Learning: See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.

Recommendations with Three Types of Learning: Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman's dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix's celebrated one-million-dollar prize for an improved recommender algorithm.

Games with Reinforcement Learning: In 1959, computer pioneer Arthur Samuel popularized the term "machine learning" for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today's sophisticated algorithms for improving play—at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.

Deep Learning for Computer Vision: Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.

Getting a Deep Learner Back on Track: Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It's an amazing feeling when you finally get the program working well.

Text Categorization with Words as Vectors: Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you badly misspell it. Focus on the concept of word embeddings, which "define" the meanings of words using vectors in high-dimensional space—a method that involves techniques from linear algebra.
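The word-embedding idea reduces to simple linear algebra. The sketch below uses tiny, invented three-dimensional vectors (real embeddings have hundreds of dimensions learned from text) and ranks words by cosine similarity, so that nearness in the vector space stands in for nearness in meaning.

```python
# Toy word-embedding sketch; these vectors are invented, not learned.
import math

embeddings = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.2, 0.1],
    "piano": [0.1, 0.9, 0.7],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

query = "cat"
for word, vec in embeddings.items():
    if word != query:
        print(word, round(cosine(embeddings[query], vec), 3))
```

Grouping words by this kind of similarity is the starting point for the text-categorization techniques described in the lecture above.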
Deep Networks That Output Language: Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!

Making Stylistic Images with Deep Networks: One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.

Making Photorealistic Images with GANs: A new approach to image generation and discrimination pits both processes against each other in a "generative adversarial network," or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.

Deep Learning for Speech Recognition: Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the algorithms machine learning uses to create today's sophisticated speech recognition systems. Get a taste of the technology by training with deep-learning software for recognizing simple words. Finally, look ahead to the prospect of conversing computers.

Inverse Reinforcement Learning from People: Are you no good at programming? Machine learning can learn from a demonstration, predict what you want, and suggest improvements. For example, inverse reinforcement learning turns the tables on the following logical relation: "if you are a horse and like carrots, go to the carrot." Inverse reinforcement looks at it like this: "if you see a horse go to the carrot, it might be because the horse likes carrots."

Causal Inference Comes to Machine Learning: Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods—the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.

The Unexpected Power of Over-Parameterization: Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses' original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.

Protecting Privacy within Machine Learning: Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.

Mastering the Machine Learning Process: Finish the course with a lightning tour of meta-learning—algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.